date()
## [1] "Thu Dec 3 23:09:48 2020"
How am I feeling? Currently I'm a bit exhausted by all the technical challenges I had getting Git and RStudio to work together on several computers. After installing a brand new Linux system, things started going better, though.
So the course started with
…but from this on I’m expecting
I expect to learn R and Git integration, and already have. I'm looking forward to hearing ideas for data analysis: I have experience with other software and with R packages as well, but with R it's sometimes difficult to see which packages and techniques are recommended for a specific task. It is also interesting to see how the approaches of data science and conventional statistics differ.
I heard about this course when Kimmo advertised it on the Statnet mailing list.
1+1
## [1] 2
0/0
## [1] NaN
Square root of two i.e. \(\sqrt 2\) equals 1.4142136.
This week’s exercise is about regression analysis. Tasks include:
Source data: http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/learning2014.txt.
#Setting working path
setwd("/home/ls/R/projekteja/IODS-project/")
#Reading the saved data file as "exercise2_data"
exercise2_data <- read.table("./data/ex2.rData")
#Structure of data set (166 obs, 7 vars)
str(exercise2_data)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ Attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
#First 10 observations (one factor and six numeric variables)
head(exercise2_data,n=10)
## gender Age Attitude deep stra surf Points
## 1 F 53 37 3.583333 3.375 2.583333 25
## 2 M 55 31 2.916667 2.750 3.166667 12
## 3 F 49 25 3.500000 3.625 2.250000 24
## 4 M 53 35 3.500000 3.125 2.250000 10
## 5 M 49 37 3.666667 3.625 2.833333 22
## 6 F 38 38 4.750000 3.625 2.416667 21
## 7 M 50 35 3.833333 2.250 1.916667 21
## 8 F 37 29 3.250000 4.000 2.833333 31
## 9 M 37 38 4.333333 4.250 2.166667 24
## 10 F 42 21 4.000000 3.500 3.000000 26
summary(exercise2_data)
## gender Age Attitude deep stra
## F:110 Min. :17.00 Min. :14.00 Min. :1.583 Min. :1.250
## M: 56 1st Qu.:21.00 1st Qu.:26.00 1st Qu.:3.333 1st Qu.:2.625
## Median :22.00 Median :32.00 Median :3.667 Median :3.188
## Mean :25.51 Mean :31.43 Mean :3.680 Mean :3.121
## 3rd Qu.:27.00 3rd Qu.:37.00 3rd Qu.:4.083 3rd Qu.:3.625
## Max. :55.00 Max. :50.00 Max. :4.917 Max. :5.000
## surf Points
## Min. :1.583 Min. : 7.00
## 1st Qu.:2.417 1st Qu.:19.00
## Median :2.833 Median :23.00
## Mean :2.787 Mean :22.72
## 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :4.333 Max. :33.00
#Gender proportions: females 66.3%, males 33.7%
prop.table(table(exercise2_data$gender))
##
## F M
## 0.6626506 0.3373494
So that was the basic information about the data set, which consists of 166 observations on one factor and six continuous variables, with 66% of the subjects being female. For more information, please visit http://www.helsinki.fi/~kvehkala/JYTmooc/JYTOPKYS3-meta.txt where the original data set is described.
#Loading additional packages and plotting bivariate distributions
library(ggplot2)
library(GGally)
ggpairs(exercise2_data, aes(col=gender, alpha=0.3),
upper=list(continuous = wrap("cor", size=2.5)),
lower=list(combo=wrap("facethist", bins=25))) +
scale_fill_manual(values = c("red","blue"))
In the graph above, red indicates females and blue males. The correlation text size was decreased for a better fit.
The highest absolute correlation coefficients with Points are:
| Variables | Pearson R |
|---|---|
| Attitude * Points | 0.437 |
| stra * Points | 0.146 |
| surf * Points | -0.144 |
-> Let’s select these three as predictors. Points is the outcome.
glm1 <- lm(Points ~ Attitude + stra + surf, data=exercise2_data)
summary(glm1)
##
## Call:
## lm(formula = Points ~ Attitude + stra + surf, data = exercise2_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.01711 3.68375 2.991 0.00322 **
## Attitude 0.33952 0.05741 5.913 1.93e-08 ***
## stra 0.85313 0.54159 1.575 0.11716
## surf -0.58607 0.80138 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
Attitude is the only statistically significant predictor for Points (p<0.001).
Surf is the least significant predictor, so let's remove it from the model.
glm2 <- lm(Points ~ Attitude + stra, data=exercise2_data)
summary(glm2)
##
## Call:
## lm(formula = Points ~ Attitude + stra, data = exercise2_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.6436 -3.3113 0.5575 3.7928 10.9295
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 8.97290 2.39591 3.745 0.00025 ***
## Attitude 0.34658 0.05652 6.132 6.31e-09 ***
## stra 0.91365 0.53447 1.709 0.08927 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared: 0.2048, Adjusted R-squared: 0.1951
## F-statistic: 20.99 on 2 and 163 DF, p-value: 7.734e-09
Attitude is still the only statistically significant predictor for Points (p<0.001).
Stra is not significant, but still has p<0.1. We might leave the model as it is, but out of curiosity, let's remove stra anyway and see what happens.
glm3 <- lm(Points ~ Attitude, data=exercise2_data)
summary(glm3)
##
## Call:
## lm(formula = Points ~ Attitude, data = exercise2_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## Attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
Interpretation of the model parameter estimates:
Attitude is still a highly significant predictor of Points (p<0.001).
The intercept equals 11.6: according to the model formula, this is the predicted value of Points when Attitude = 0.
The slope parameter of Attitude equals 0.353, i.e. a one-unit increase in Attitude corresponds to a 0.353-point increase in Points.
Coefficients of determination: unadjusted 19.1%, adjusted 18.6%. Both are a bit lower than in the previous model, but let's leave it that way.
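As a quick sanity check of the fitted equation (a sketch; the coefficient values are taken from the summary above), the predicted Points for a hypothetical student with Attitude = 30 can be computed by hand and with predict():
# manual prediction from the fitted equation: intercept + slope * Attitude
11.637 + 0.353 * 30
# the same via predict(); both give roughly 22.2 points
predict(glm3, newdata = data.frame(Attitude = 30))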
#Drawing diagnostic plots. Choosing the plots 1, 2 and 5. Full list of plot options:
#1. Residual vs fitted
#2. Normal QQ plot
#3. Scale-location
#4. Cooks distance
#5. Residuals vs. Leverage
#6. Cooks distances vs Leverage
#Defining plot matrix of 1x3. Pin defines 1:1 aspect ratio for each sub plot.
par(mfrow = c(1,3), pin=c(1.75,1.75))
plot(glm3, which=c(1,2,5))
Residuals vs. fitted plot: the distribution is symmetric and there are no clear outliers. Looks fine.
Normal quantile-quantile plot: there is slight deviation from the diagonal at the tails, but nothing alarming.
Residuals vs. leverage plot: the highest Cook's distance is below 0.05, i.e. low. No problems here.
Conclusion: based on visual examination, the model seems adequate and its assumptions are met. Attitude predicts the value of Points.
This week’s exercise is about logistic regression analysis. Tasks include:
Source data: Secondary school student alcohol consumption in Portugal (P. Cortez and A. Silva. Using Data Mining to Predict Secondary School Student Performance). Center for Machine Learning and Intelligent Systems at the University of California, Irvine.
Data: https://archive.ics.uci.edu/ml/datasets/Student+Performance.
Description: https://archive.ics.uci.edu/ml/datasets/Student+Performance#.
#Setting working path
setwd("/home/ls/R/projekteja/IODS-project/")
#Reading the data created with "create_alc.R".
alc <- read.table("./data/alc.rData")
Loading the dplyr library for further data management needs.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
#Variable dimensions, variable names and some values.
glimpse(alc)
## Rows: 370
## Columns: 51
## $ school <fct> GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP…
## $ sex <fct> F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F, F…
## $ age <int> 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15…
## $ address <fct> R, R, R, R, R, R, R, R, R, U, U, U, U, U, U, U, U, U, U, U…
## $ famsize <fct> GT3, GT3, GT3, GT3, GT3, GT3, GT3, LE3, LE3, GT3, GT3, GT3…
## $ Pstatus <fct> T, T, T, T, T, T, T, T, T, A, A, T, T, T, T, T, T, T, T, T…
## $ Medu <int> 1, 1, 2, 2, 3, 3, 3, 2, 3, 3, 4, 1, 1, 1, 1, 1, 2, 2, 2, 3…
## $ Fedu <int> 1, 1, 2, 4, 3, 4, 4, 2, 1, 3, 3, 1, 1, 1, 2, 2, 1, 2, 3, 2…
## $ Mjob <fct> at_home, other, at_home, services, services, services, ser…
## $ Fjob <fct> other, other, other, health, services, health, teacher, se…
## $ reason <fct> home, reputation, reputation, course, reputation, course, …
## $ guardian <fct> mother, mother, mother, mother, other, mother, father, mot…
## $ traveltime <int> 2, 1, 1, 1, 2, 1, 2, 2, 2, 1, 1, 3, 1, 1, 1, 1, 3, 1, 2, 1…
## $ studytime <int> 4, 2, 1, 3, 3, 3, 3, 2, 4, 4, 2, 1, 2, 2, 2, 2, 3, 4, 1, 2…
## $ schoolsup <fct> yes, yes, yes, yes, no, yes, no, yes, no, yes, no, no, no,…
## $ famsup <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, no, yes, yes,…
## $ activities <fct> yes, no, yes, yes, yes, yes, no, no, no, no, yes, yes, yes…
## $ nursery <fct> yes, no, yes, yes, yes, yes, yes, yes, no, yes, yes, no, n…
## $ higher <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, yes…
## $ internet <fct> yes, yes, no, yes, yes, yes, yes, yes, yes, no, yes, yes, …
## $ romantic <fct> no, yes, no, no, yes, no, yes, no, no, no, no, yes, no, no…
## $ famrel <int> 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 3, 5, 5, 3, 3…
## $ freetime <int> 1, 3, 3, 3, 2, 3, 2, 1, 4, 3, 3, 3, 3, 4, 3, 2, 2, 1, 5, 3…
## $ goout <int> 2, 4, 1, 2, 1, 2, 2, 3, 2, 3, 2, 3, 2, 2, 2, 3, 2, 2, 1, 2…
## $ Dalc <int> 1, 2, 1, 1, 2, 1, 2, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1…
## $ Walc <int> 1, 4, 1, 1, 3, 1, 2, 3, 3, 1, 1, 2, 3, 2, 1, 2, 1, 1, 1, 1…
## $ health <int> 1, 5, 2, 5, 3, 5, 5, 4, 3, 4, 1, 4, 4, 5, 5, 1, 4, 3, 5, 3…
## $ n <int> 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2…
## $ id.p <int> 1096, 1073, 1040, 1025, 1166, 1039, 1131, 1069, 1070, 1106…
## $ id.m <int> 2096, 2073, 2040, 2025, 2153, 2039, 2131, 2069, 2070, 2106…
## $ failures <int> 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2…
## $ paid <fct> yes, no, no, no, yes, no, no, no, no, no, no, no, no, no, …
## $ absences <int> 3, 2, 8, 2, 5, 2, 0, 1, 9, 10, 0, 3, 2, 0, 4, 1, 2, 6, 2, …
## $ G1 <int> 10, 10, 14, 10, 12, 12, 11, 10, 16, 10, 14, 10, 11, 10, 12…
## $ G2 <int> 12, 8, 13, 10, 12, 12, 6, 10, 16, 10, 14, 6, 11, 12, 12, 1…
## $ G3 <int> 12, 8, 12, 9, 12, 12, 6, 10, 16, 10, 15, 6, 11, 12, 12, 14…
## $ failures.p <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1…
## $ paid.p <fct> yes, no, no, no, yes, no, no, no, no, no, no, no, no, no, …
## $ absences.p <int> 4, 2, 8, 2, 2, 2, 0, 0, 6, 10, 0, 6, 2, 0, 6, 0, 0, 4, 4, …
## $ G1.p <int> 13, 13, 14, 10, 13, 11, 10, 11, 15, 10, 15, 11, 13, 12, 13…
## $ G2.p <int> 13, 11, 13, 11, 13, 12, 11, 10, 15, 10, 14, 12, 12, 12, 12…
## $ G3.p <int> 13, 11, 12, 10, 13, 12, 12, 11, 15, 10, 15, 13, 12, 12, 13…
## $ failures.m <int> 1, 2, 0, 0, 2, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3…
## $ paid.m <fct> yes, no, yes, yes, yes, yes, no, yes, no, no, yes, no, no,…
## $ absences.m <int> 2, 2, 8, 2, 8, 2, 0, 2, 12, 10, 0, 0, 2, 0, 2, 2, 4, 8, 0,…
## $ G1.m <int> 7, 8, 14, 10, 10, 12, 12, 8, 16, 10, 14, 8, 9, 8, 10, 16, …
## $ G2.m <int> 10, 6, 13, 9, 10, 12, 0, 9, 16, 11, 15, 0, 10, 11, 11, 15,…
## $ G3.m <int> 10, 5, 13, 8, 10, 11, 0, 8, 16, 11, 15, 0, 10, 11, 11, 15,…
## $ alc_use <dbl> 1.0, 3.0, 1.0, 1.0, 2.5, 1.0, 2.0, 2.0, 2.5, 1.0, 1.0, 1.5…
## $ high_use <lgl> FALSE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, TRUE…
## $ cid <int> 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009, 3010…
So there are 370 cases and 51 variables. We'll concentrate on two of them (a sketch of their derivation is shown below):
- alc_use, the mean of Dalc and Walc (workday and weekend alcohol consumption, each ranging from 1 = very low to 5 = very high)
- high_use, a dichotomization of alc_use with cut point 2.
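For reference, a minimal dplyr sketch of how these two variables could be derived (the actual derivation was done beforehand in the data wrangling script):
# sketch: average of weekday and weekend consumption, then dichotomize at 2
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2,
              high_use = alc_use > 2)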
Next, let's choose four interesting variables for further analysis. The response variable will be high/low alcohol consumption.
The interesting/potential predictors I chose:
1. sex - student's sex (binary: 'F' - female or 'M' - male). Gender will definitely have an effect on alcohol consumption; on average males drink more. We'll see how it goes here.
2. internet - Internet access at home (binary: yes or no). The data are from 2014. It could be that subjects with an internet connection stay holed up at home instead of having a good time in restaurants etc., so internet access may decrease alcohol consumption. Maybe.
3. romantic - in a romantic relationship (binary: yes or no). Having a partner may cause a lot of stress, which may lead to drinking (whether it helps or not). On the other hand, a relationship may decrease drinking (all the time goes to sitting hand in hand, or the partner may prohibit alcohol use). Who knows, there are several possible associations.
4. absences - number of school absences (numeric: from 0 to 93). Drinking may cause school absences, or there may be some common factor causing both absences and drinking. Caution: absences is a count variable while the others are dichotomous.
These predictors may have an effect, but it's hard to know beforehand. Let's explore the variables and their associations.
#Loading ggplot2 for better plotting opportunities
library(ggplot2)
g1 <- ggplot(data = alc, aes(x = high_use))
g1 + geom_bar(aes(fill=sex))
g1 + geom_bar(aes(fill=internet))
g1 + geom_bar(aes(fill=romantic))
g1 + geom_boxplot(aes(y=absences, fill=high_use)) +
stat_summary(fun=mean, geom="point", aes(y=absences), col="blue", size=5, shape="diamond") +
theme(legend.position="none")
Those were the graphs. The gender distribution seems to differ somewhat between the alcohol use groups. The mean absence count seems to be higher in the high use group (mean = blue diamond symbol). The absence distributions are strongly skewed to the right, but at least they are similarly shaped in both groups. This is not a perfect situation, but logistic regression usually tolerates this kind of skewness quite well.
Next, a one-way table and cross-tabulations of alcohol use vs. the predictors, as raw counts.
table("High alcohol use"=alc$high_use)
## High alcohol use
## FALSE TRUE
## 259 111
table("High alcohol use"=alc$high_use, "Sex"=alc$sex)
## Sex
## High alcohol use F M
## FALSE 154 105
## TRUE 41 70
table("High alcohol use"=alc$high_use, "Internet access at home (binary: yes or no)"=alc$internet)
## Internet access at home (binary: yes or no)
## High alcohol use no yes
## FALSE 42 217
## TRUE 15 96
table("High alcohol use"=alc$high_use, "Romantic relationship (binary: yes or no)"=alc$romantic)
## Romantic relationship (binary: yes or no)
## High alcohol use no yes
## FALSE 173 86
## TRUE 78 33
#Basic descriptives by group
tapply(alc$absences, alc$high_use, summary)
## $`FALSE`
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00 1.00 3.00 3.71 5.00 45.00
##
## $`TRUE`
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000 2.000 4.000 6.378 9.000 44.000
The response counts are 259 (low) and 111 (high alcohol consumption). The crosstabs show the same numbers as the previous plots. Absences have a higher mean in the high alcohol use group (6.4 vs. 3.7).
Let's look at percentages (proportions, to be exact), where the figures are scaled so that each row sums to 100%. This can be achieved with a pipe and prop.table(margin=1).
table("High alcohol use"=alc$high_use, "Sex"=alc$sex) %>% prop.table(margin=1)
## Sex
## High alcohol use F M
## FALSE 0.5945946 0.4054054
## TRUE 0.3693694 0.6306306
table("High alcohol use"=alc$high_use, "Internet access at home (binary: yes or no)"=alc$internet) %>% prop.table(margin=1)
## Internet access at home (binary: yes or no)
## High alcohol use no yes
## FALSE 0.1621622 0.8378378
## TRUE 0.1351351 0.8648649
table("High alcohol use"=alc$high_use, "Romantic relationship (binary: yes or no)"=alc$romantic) %>% prop.table(margin=1)
## Romantic relationship (binary: yes or no)
## High alcohol use no yes
## FALSE 0.6679537 0.3320463
## TRUE 0.7027027 0.2972973
The proportion of males is higher in the high alcohol consumption group. The proportions of internet access and of romantic relationships are pretty much the same in both consumption groups.
General logistic model with multiple predictors can be defined as \[ \log\left(\frac{p({\bf x})}{1 - p({\bf x})}\right) = \beta_0 + \beta_1 x_1 + \ldots + \beta_{p - 1} x_{p - 1} \]
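Equivalently, solving for the probability gives \[ p({\bf x}) = \frac{e^{\beta_0 + \beta_1 x_1 + \ldots + \beta_{p - 1} x_{p - 1}}}{1 + e^{\beta_0 + \beta_1 x_1 + \ldots + \beta_{p - 1} x_{p - 1}}}, \] which is the quantity returned later by predict(..., type = "response").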
Let's fit a binary-response logistic regression model with multiple predictors using the glm function. The outcome is high_use, with TRUE (high alcohol consumption) as the event.
# find the model with glm()
# Model 1 (full model)
m1 <- glm(high_use ~ sex + internet + romantic + absences, data = alc, family = "binomial")
summary(m1)
##
## Call:
## glm(formula = high_use ~ sex + internet + romantic + absences,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2366 -0.8749 -0.6095 1.1165 2.0636
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.791835 0.354025 -5.061 4.16e-07 ***
## sexM 1.022570 0.245172 4.171 3.04e-05 ***
## internetyes 0.006198 0.340680 0.018 0.985
## romanticyes -0.217035 0.263397 -0.824 0.410
## absences 0.098186 0.023536 4.172 3.02e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 414.97 on 365 degrees of freedom
## AIC: 424.97
##
## Number of Fisher Scoring iterations: 4
Internet access doesn't seem to be a statistically significant predictor of high alcohol use. Let's remove it from the model and try again.
#Model 2
m2 <- glm(high_use ~ sex + romantic + absences, data = alc, family = "binomial")
summary(m2)
##
## Call:
## glm(formula = high_use ~ sex + romantic + absences, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2370 -0.8745 -0.6091 1.1168 2.0641
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.78707 0.23822 -7.502 6.30e-14 ***
## sexM 1.02291 0.24444 4.185 2.86e-05 ***
## romanticyes -0.21668 0.26268 -0.825 0.409
## absences 0.09823 0.02341 4.196 2.72e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 414.97 on 366 degrees of freedom
## AIC: 422.97
##
## Number of Fisher Scoring iterations: 4
A romantic relationship doesn't seem to be a statistically significant predictor of high alcohol use either. Let's remove it from the model as well and try again.
#Model 3 (final model)
m3 <- glm(high_use ~ sex + absences, data = alc, family = "binomial")
summary(m3)
##
## Call:
## glm(formula = high_use ~ sex + absences, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2706 -0.8838 -0.5901 1.0960 1.9993
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.85303 0.22609 -8.196 2.49e-16 ***
## sexM 1.03451 0.24395 4.241 2.23e-05 ***
## absences 0.09671 0.02336 4.140 3.48e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 415.66 on 367 degrees of freedom
## AIC: 421.66
##
## Number of Fisher Scoring iterations: 4
Now both remaining predictors are statistically significant (p<0.001), so we can leave the model as it is. The slope parameter for sex (being male) is 1.03 with standard error 0.24, for the event of high alcohol consumption. However, these are log-odds, which can be difficult to interpret. Therefore, let's calculate and show odds ratios, which are the exponentiated estimates, i.e. the constant e raised to the power of each coefficient.
This is a fairly plausible result. I'm still surprised that internet access and a romantic relationship didn't play any significant role.
# compute odds ratios (OR)
OR <- coef(m3) %>% exp
# compute confidence intervals (CI)
CI <- confint(m3) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.1567616 0.09877521 0.2401977
## sexM 2.8137400 1.75518106 4.5760120
## absences 1.1015380 1.05453436 1.1557571
So the odds ratio (95% confidence interval) for sex is 2.81 (1.76-4.58) and for absences 1.10 (1.05-1.16). Males have 2.8-fold odds of belonging to the high alcohol consumption group compared to females. Similarly, a one-unit increase in the absence count multiplies the odds of being in the high consumption group by 1.1, i.e. increases them by 10%.
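As a quick back-of-the-envelope check based on the estimates above, five additional absences, for example, multiply the odds by about 1.1 to the fifth power, i.e. roughly 1.6:
# odds multiplier for 5 additional absences, using the absences coefficient of m3
exp(5 * coef(m3)["absences"])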
Just to be sure, let's fit a single-predictor model and compare a manual OR calculation with the R model results.
#Model 4 (single predictor model)
m4 <- glm(high_use ~ sex, data = alc, family = "binomial")
OR <- coef(m4) %>% exp
CI <- confint(m4) %>% exp
## Waiting for profiling to be done...
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.2662338 0.1863238 0.3717646
## sexM 2.5040650 1.5902950 3.9838892
# tabulate the target variable versus the predictor
table(high_use = alc$high_use, alc$sex)
##
## high_use F M
## FALSE 154 105
## TRUE 41 70
The OR for sex in the single-predictor model is 2.504065. We should get the same result by dividing the female/male ratio in the low consumption group by the female/male ratio in the high consumption group, i.e.
\[ \frac{154/105} {41/70} \]
#Calculate with R
(154/105)/(41/70)
## [1] 2.504065
Yes, it works.
Let's see how well the previous model m3 is able to predict alcohol use.
We'll produce a 2x2 cross-tabulation of predictions versus the actual values. First we need to predict the probabilities, add them to the data set, dichotomize the results, and inspect the data.
# predict() the probability of high_use
probabilities <- predict(m3, type = "response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# see the last 15 original classes, predicted probabilities, and class predictions
select(alc, sex, probability, high_use, prediction) %>% tail(15)
## sex probability high_use prediction
## 356 M 0.3708919 TRUE FALSE
## 357 M 0.3937240 TRUE FALSE
## 358 M 0.3486225 TRUE FALSE
## 359 M 0.3486225 FALSE FALSE
## 360 M 0.5129601 TRUE TRUE
## 361 M 0.4646683 TRUE FALSE
## 362 M 0.3708919 TRUE FALSE
## 363 M 0.3486225 TRUE FALSE
## 364 M 0.3937240 TRUE FALSE
## 365 M 0.3708919 FALSE FALSE
## 366 M 0.3937240 TRUE FALSE
## 367 M 0.3060791 FALSE FALSE
## 368 M 0.3937240 TRUE FALSE
## 369 M 0.4887880 TRUE FALSE
## 370 M 0.3060791 FALSE FALSE
Now we are ready to see how the predictions match reality.
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 251 8
## TRUE 86 25
So, in the high alcohol use group only 25 cases are correctly predicted into that group, while 86 cases are predicted into the wrong group. In the low use group the prediction is correct for 251 cases and incorrect for 8. Let's see the percentages.
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table(margin=1)
## prediction
## high_use FALSE TRUE
## FALSE 0.96911197 0.03088803
## TRUE 0.77477477 0.22522523
In the low use group the prediction is right in 97% of cases, while in the high use group it is correct in only 23% of cases. Clearly the model is good at finding cases of low alcohol consumption but has difficulties identifying the heavier users.
Let’s plot the results.
# Some data management. Proportions into data frame.
props <- as.data.frame(table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table(margin=1))
# Plotting with ggplot2
ggplot(data = props) +
geom_bar(stat="identity", aes(x=high_use, y=Freq, fill=prediction)) +
scale_y_continuous(labels=scales::percent) +
ylab("Proportion")
# Plotting with ggplot2
ggplot(data = alc) +
geom_point(aes(x = probability, y = high_use, color = prediction), size=5, alpha=0.5) +
ylab("High alcohol use")
This tells the same story. With the chosen probability threshold of 50%, the model catches most low users but misses most cases in the high alcohol use group. Maybe the probability threshold should be lowered below 50%?
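As a quick check (a sketch using a hypothetical threshold of 0.3; not part of the original analysis), the confusion matrix can be recomputed with a lower cut-off:
# classify as high use already at predicted probability > 0.3 (hypothetical threshold)
alc <- mutate(alc, prediction_03 = probability > 0.3)
table(high_use = alc$high_use, prediction = alc$prediction_03) %>% prop.table(margin=1)
This would catch more of the true high users, at the cost of more false alarms in the low use group.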
Let's perform 10-fold cross-validation on our model and see whether it has better test-set performance (a smaller prediction error in 10-fold cross-validation) than the model introduced in DataCamp (which had an error of about 0.26).
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2540541
library(boot)
#10-fold cross-validation
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m3, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2567568
So this is only a slightly lower prediction error than in the DataCamp exercise - practically the same.
Let's compare several cross-validations with different sets of predictors, starting with a very high number of predictors and exploring how the training and testing errors change as we move to models with fewer predictors.
There were some problems when the analysis used a huge number of predictors: the estimation did not converge. So let's start with a single-predictor model and add predictors one by one until we have 20 models.
I first tried to do the whole thing programmatically, letting R generate all the models, validations and error estimates automatically. However, I wasn't clever enough to create the predictor lists for glm without extra quotes, so I got stuck. The modelling below is therefore done manually, model by model, which is awkward; a sketch of a programmatic alternative follows.
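For reference, here is a sketch of how the formula building could be automated with reformulate(), which constructs a formula from a character vector of predictor names, so there are no quoting problems. It assumes the predictors are added one at a time in the listed order, whereas the manual models below occasionally add two at a time, so the results would not be exactly identical. A plain for loop (rather than sapply) is used so that cv.glm() can find the formula object when it refits the model.
# sketch: build nested models programmatically and cross-validate each one
predictor_pool <- c("school", "sex", "age", "address", "famsize", "Pstatus",
                    "Medu", "Fedu", "Mjob", "Fjob", "reason", "guardian",
                    "traveltime", "studytime", "schoolsup", "famsup",
                    "activities", "nursery", "higher", "internet", "romantic", "famrel")
cv_errors <- numeric(length(predictor_pool))
for (k in seq_along(predictor_pool)) {
  fml <- reformulate(predictor_pool[1:k], response = "high_use")  # e.g. high_use ~ school + sex
  fit <- glm(fml, data = alc, family = "binomial")
  cv_errors[k] <- cv.glm(data = alc, cost = loss_func, glmfit = fit, K = 10)$delta[1]
}
For completeness, here are the original manual models: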
m1 <- glm(high_use ~ school, data = alc, family = "binomial")
m2 <- glm(high_use ~ school + sex + age, data = alc, family = "binomial")
m3 <- glm(high_use ~ school + sex + age + address, data = alc, family = "binomial")
m4 <- glm(high_use ~ school + sex + age + address + famsize, data = alc, family = "binomial")
m5 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu, data = alc, family = "binomial")
m6 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu, data = alc, family = "binomial")
m7 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob, data = alc, family = "binomial")
m8 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob, data = alc, family = "binomial")
m9 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason, data = alc, family = "binomial")
m10 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian, data = alc, family = "binomial")
m11 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime, data = alc, family = "binomial")
m12 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime, data = alc, family = "binomial")
m13 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup, data = alc, family = "binomial")
m14 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup, data = alc, family = "binomial")
m15 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup + activities, data = alc, family = "binomial")
m16 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup + activities + nursery, data = alc, family = "binomial")
m17 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup + activities + nursery + higher, data = alc, family = "binomial")
m18 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup + activities + nursery + higher + internet, data = alc, family = "binomial")
m19 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup + activities + nursery + higher + internet + romantic, data = alc, family = "binomial")
m20 <- glm(high_use ~ school + sex + age + address + famsize + Pstatus + Medu + Fedu + Mjob + Fjob + reason + guardian + traveltime + studytime + schoolsup + famsup + activities + nursery + higher + internet + romantic + famrel, data = alc, family = "binomial")
That was the modeling. Next, let’s conduct 10-fold cross-validation for each model.
cv1 <- cv.glm(data = alc, cost = loss_func, glmfit = m1, K = 10)
cv2 <- cv.glm(data = alc, cost = loss_func, glmfit = m2, K = 10)
cv3 <- cv.glm(data = alc, cost = loss_func, glmfit = m3, K = 10)
cv4 <- cv.glm(data = alc, cost = loss_func, glmfit = m4, K = 10)
cv5 <- cv.glm(data = alc, cost = loss_func, glmfit = m5, K = 10)
cv6 <- cv.glm(data = alc, cost = loss_func, glmfit = m6, K = 10)
cv7 <- cv.glm(data = alc, cost = loss_func, glmfit = m7, K = 10)
cv8 <- cv.glm(data = alc, cost = loss_func, glmfit = m8, K = 10)
cv9 <- cv.glm(data = alc, cost = loss_func, glmfit = m9, K = 10)
cv10 <- cv.glm(data = alc, cost = loss_func, glmfit = m10, K = 10)
cv11 <- cv.glm(data = alc, cost = loss_func, glmfit = m11, K = 10)
cv12 <- cv.glm(data = alc, cost = loss_func, glmfit = m12, K = 10)
cv13 <- cv.glm(data = alc, cost = loss_func, glmfit = m13, K = 10)
cv14 <- cv.glm(data = alc, cost = loss_func, glmfit = m14, K = 10)
cv15 <- cv.glm(data = alc, cost = loss_func, glmfit = m15, K = 10)
cv16 <- cv.glm(data = alc, cost = loss_func, glmfit = m16, K = 10)
cv17 <- cv.glm(data = alc, cost = loss_func, glmfit = m17, K = 10)
cv18 <- cv.glm(data = alc, cost = loss_func, glmfit = m18, K = 10)
cv19 <- cv.glm(data = alc, cost = loss_func, glmfit = m19, K = 10)
cv20 <- cv.glm(data = alc, cost = loss_func, glmfit = m20, K = 10)
Let’s create a data frame comprising deltas and number of predictors.
deltas <- c(cv1$delta[1], cv2$delta[1], cv3$delta[1], cv4$delta[1], cv5$delta[1],
cv6$delta[1], cv7$delta[1], cv8$delta[1], cv9$delta[1], cv10$delta[1],
cv11$delta[1], cv12$delta[1], cv13$delta[1], cv14$delta[1], cv15$delta[1],
cv16$delta[1], cv17$delta[1], cv18$delta[1], cv19$delta[1], cv20$delta[1])
preds <- c(1:20)
compdata <- data.frame(deltas,preds)
And plotting the results.
ggplot(compdata, aes(x=preds, y=deltas)) +
geom_line()
We can see that the average number of wrong predictions in cross-validation does not increase or decrease linearly. However, the error mostly increases as more complicated models are fitted, so simpler models are preferable.
This week’s exercise is about cluster analysis and classification. Tasks include:
Setting working path, loading library and data set.
#Setting working path
setwd("/home/ls/R/projekteja/IODS-project/")
# access the MASS package
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
# load the data
data("Boston")
The Boston data set is included in the MASS package. The data describe housing values in the suburbs of Boston, together with other demographic and environmental information. Further information: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html.
List of variables (from above url):
| Variable name | Definition |
|---|---|
| crim | per capita crime rate by town. |
| zn | proportion of residential land zoned for lots over 25,000 sq.ft. |
| indus | proportion of non-retail business acres per town. |
| chas | Charles River dummy variable (= 1 if tract bounds river; 0 otherwise). |
| nox | nitrogen oxides concentration (parts per 10 million). |
| rm | average number of rooms per dwelling. |
| age | proportion of owner-occupied units built prior to 1940. |
| dis | weighted mean of distances to five Boston employment centres. |
| rad | index of accessibility to radial highways. |
| tax | full-value property-tax rate per $10,000. |
| ptratio | pupil-teacher ratio by town. |
| black | 1000(Bk - 0.63)\(^{2}\) where Bk is the proportion of blacks by town. |
| lstat | lower status of the population (percent). |
| medv | median value of owner-occupied homes in $1000s. |
Sources:
- Harrison, D. and Rubinfeld, D.L. (1978) Hedonic prices and the demand for clean air. J. Environ. Economics and Management 5, 81–102.
- Belsley D.A., Kuh, E. and Welsch, R.E. (1980) Regression Diagnostics. Identifying Influential Data and Sources of Collinearity. New York: Wiley.
Loading the dplyr library for further data management needs.
library(dplyr)
#Data dimensions, variable names and some values.
glimpse(Boston)
## Rows: 506
## Columns: 14
## $ crim <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.08829…
## $ zn <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5, 12.5, …
## $ indus <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, 7.87, 7…
## $ chas <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ nox <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524, 0.524…
## $ rm <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172, 5.631…
## $ age <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0, 85.9, …
## $ dis <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605, 5.950…
## $ rad <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4…
## $ tax <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311, 311, 3…
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, 15.2, 1…
## $ black <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60, 396.9…
## $ lstat <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.93, 17.1…
## $ medv <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, 18.9, 1…
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The Boston data frame has 506 rows and 14 columns. All variables are numeric; they are continuous except for the binary variable chas ("1 if tract bounds Charles River; 0 otherwise"). The response/target variable will be crim ("per capita crime rate by town"), which ranges from 0.006 to 89.0 with mean 3.6 and median 0.26. The large difference between the mean and the median suggests a strongly skewed distribution.
The descriptives were already shown in step 2. The variables clearly have distinct variances and locations, and some seem strongly skewed as well. Let's have a visual look.
# Histogram and boxplot of the crim variable
par(mfrow = c(1,2), pin=c(1.75,1.75))
hist(Boston$crim,col="blue",xlab=NULL)
boxplot(Boston$crim,col="blue", main="Boxplot of Boston$crim")
We can see that the crime rate distribution is strongly skewed. We will standardize and categorize the variables later.
Pairwise distributions (the plot is enlarged by defining additional R chunk parameters, which are not shown in the output):
# Bivariate scatter plots (excluding 4th variable chas)
pairs(Boston[,c(1:3,5:dim(Boston)[2])], pch=19, cex=0.05, lower.panel=NULL)
The scatter plot matrix shows that the bivariate distributions are not always multinormal (almost never, to be honest). Some distributions have large empty areas and/or outlying observations, like rad (index of accessibility to radial highways). Not good.
Some bivariate associations seem stronger than others. Let's calculate the correlations.
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) %>% round(digits=2)
# print the correlation matrix
cor_matrix
## crim zn indus chas nox rm age dis rad tax ptratio
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72 0.38
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04 -0.12
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67 0.19
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29 -0.36
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51 0.26
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53 -0.23
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91 0.46
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00 0.46
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1.00
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54 0.37
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47 -0.51
## black lstat medv
## crim -0.39 0.46 -0.39
## zn 0.18 -0.41 0.36
## indus -0.36 0.60 -0.48
## chas 0.05 -0.05 0.18
## nox -0.38 0.59 -0.43
## rm 0.13 -0.61 0.70
## age -0.27 0.60 -0.38
## dis 0.29 -0.50 0.25
## rad -0.44 0.49 -0.38
## tax -0.44 0.54 -0.47
## ptratio -0.18 0.37 -0.51
## black 1.00 -0.37 0.33
## lstat -0.37 1.00 -0.74
## medv 0.33 -0.74 1.00
These are Pearson correlation coefficients, which are not always reliable when variables are far from normal; a rank-based check is sketched below, but let's keep Pearson here, as in DataCamp. In any case it is challenging to assimilate this many numbers, so a visualization is better.
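For reference, a quick robustness check would be to compute rank-based (Spearman) correlations, which do not assume normality, and compare them with the Pearson values; a minimal sketch:
# sketch: Spearman rank correlations as a non-parametric comparison
cor_spearman <- cor(Boston, method = "spearman") %>% round(digits=2)
Now back to visualizing the Pearson matrix with corrplot.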
# installing and/or loading corrplot library for correlation coefficient visualization
if (!require("corrplot")) {
install.packages("corrplot")
library(corrplot)
}
## Loading required package: corrplot
## corrplot 0.84 loaded
# visualize the correlation matrix
corrplot.mixed(cor_matrix, tl.cex=0.75, number.cex=0.75, number.digits=2)
That's a fancy way to present correlation coefficients. We can directly see that there are some strong correlations, indicated by large circles: with the default palette, blue for positive and red for negative values. The highest value is +0.91, between rad and tax (index of accessibility to radial highways and full-value property-tax rate per $10,000).
# center and standardize variables
# 'as.data.frame' is needed since it makes referring variables easier later
boston_scaled <- as.data.frame(scale(Boston))
# Bivariate scatter plots
pairs(boston_scaled, pch=19, cex=0.05, lower.panel=NULL)
#Means, second argument refers to columns
apply(boston_scaled, 2, FUN=mean)
## crim zn indus chas nox
## -7.202981e-18 2.282481e-17 1.595296e-17 -3.544441e-18 -2.150022e-16
## rm age dis rad tax
## -1.056462e-16 -1.643357e-16 1.153079e-16 4.799652e-17 2.024415e-17
## ptratio black lstat medv
## -3.924246e-16 -1.151679e-16 -7.052778e-17 -1.374631e-16
#Variances, second argument refers to columns
apply(boston_scaled, 2, FUN=var)
## crim zn indus chas nox rm age dis rad tax
## 1 1 1 1 1 1 1 1 1 1
## ptratio black lstat medv
## 1 1 1 1
We can see that all variables have now been standardized, i.e. scaled to have mean 0 (or very close to it) and variance 1. This can't heal difficult distributions like those of rad or chas, though. The correlations stay the same, so they are not shown again.
Note: scale() returns a matrix, so the result has to be converted back to a data frame for the later operations (such as dplyr::select()) to work. This means that the command
boston_scaled <- scale(Boston)
is not enough; it needs to be written as
boston_scaled <- as.data.frame(scale(Boston)).
The same applies to task #7.
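A minimal illustration of why the conversion is needed:
# scale() returns a matrix, not a data frame, hence the as.data.frame() conversion
class(scale(Boston))                  # "matrix" "array" (just "matrix" in older R versions)
class(as.data.frame(scale(Boston)))   # "data.frame"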
Creating a categorized version of the crime variable. The cut points are quantiles, so there will be four categories.
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks=bins, include.lowest=TRUE, labels=c("low","med_low","med_high","high"))
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
Looks fine, distribution is as close to 25/25/25/25 percentages as possible.
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
Now the original crim has been replaced by the categorical crime.
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
dim(train)
## [1] 404 14
dim(test)
## [1] 102 14
Now the data sets train and test have been created, the former comprising 80% and the latter 20% of the original cases.
Fitting an LDA model on the training set with the crime rate category as the target variable. All other variables are used as predictors.
# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2475248 0.2351485 0.2599010 0.2574257
##
## Group means:
## zn indus chas nox rm age
## low 0.90026395 -0.9095128 -0.07547406 -0.8680538 0.43367233 -0.8583616
## med_low -0.08126141 -0.3134317 -0.02367011 -0.6021895 -0.08081465 -0.4014804
## med_high -0.38760189 0.1710286 0.14012905 0.3367277 0.08335352 0.3908254
## high -0.48724019 1.0170690 -0.04518867 1.0292447 -0.42496303 0.8005551
## dis rad tax ptratio black lstat
## low 0.8281621 -0.6947544 -0.7254613 -0.38686128 0.3837518 -0.77251305
## med_low 0.3443346 -0.5551250 -0.4778913 0.02686008 0.3004743 -0.21191160
## med_high -0.3488916 -0.4196693 -0.3190424 -0.25176495 0.1024380 0.05023711
## high -0.8622595 1.6386213 1.5144083 0.78135074 -0.7934970 0.90970560
## medv
## low 0.51083402
## med_low 0.04507529
## med_high 0.17143630
## high -0.67539221
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.083566396 0.728208246 -0.97983600
## indus 0.038831509 -0.406892610 0.31385706
## chas -0.067575803 -0.019117553 0.08563572
## nox 0.335193849 -0.712335578 -1.38014036
## rm -0.124906878 -0.090013465 -0.11875611
## age 0.234390895 -0.350203395 -0.24783851
## dis -0.064944192 -0.397200930 0.07773582
## rad 3.149085529 0.787832413 -0.20626813
## tax 0.008207372 0.165809137 0.64560659
## ptratio 0.102881216 0.065647276 -0.15497300
## black -0.116462602 0.008214125 0.06677193
## lstat 0.188440209 -0.258920943 0.35008722
## medv 0.185315212 -0.389698563 -0.13029405
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9510 0.0378 0.0112
classes <- as.numeric(train$crime)
# LDA (bi)plot
plot(lda.fit, col=classes)
Interestingly, the plot() function uses the group names as plotting symbols regardless of the pch parameter; MASS's plot method for lda objects draws the class labels as text rather than points. For example, the following commands yield identical plots:
- plot(lda.fit, col=classes, pch=19)
- plot(lda.fit, col=classes, pch=classes)
Showing category values as symbols looks rather awkward, though.
Anyway, the plot visualizes how the target variable classes are separated by the linear combinations of the predictor variables. The crime quantile groups are mostly separated very nicely. The high crime group seems to be the most compact, while med_high is the most scattered.
Let’s enhance plot by adding arrows into it, just like in Datacamp exercise.
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
plot(lda.fit, col=classes, pch=classes, dimen=2)
lda.arrows(lda.fit, myscale = 2)
Looks like rad (“index of accessibility to radial highways”) is an important factor here. Watch out for living close to radial highways!
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 18 7 2 0
## med_low 4 18 9 0
## med_high 0 2 18 1
## high 0 0 0 23
The cross-tabulation shows that the prediction is fully correct in 18+18+18+23 = 77 cases, leaving 25 more or less incorrect cases; the proportion of correct predictions is hence about 0.75. We can see that when the prediction goes wrong, it is still mostly placed in a category close to the real one. And all 23 high crime cases are predicted correctly. Not bad at all, I guess.
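The proportion of correct predictions can also be computed directly from the confusion matrix (a small sketch using the objects created above; the helper name conf is not in the original code):
# accuracy: sum of the diagonal divided by the total number of test cases
conf <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(conf)) / sum(conf)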
# reloading the Boston dataset
data('Boston')
# standardizing the dataset
# again, as.data.frame is needed, otherwise further operations won't work.
boston_scaled <- as.data.frame(scale(Boston))
# euclidean and manhattan distance matrix
dist_eu <- dist(boston_scaled)
dist_man <- dist(boston_scaled, method="manhattan")
# look at the summaries of the distances
summary(dist_eu)
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
par(mfrow = c(2,2), pin=c(1.75,1.75))
#Euclidean
hist(dist_eu,col="blue",xlab=NULL)
boxplot(dist_eu,col="blue", main="Boxplot of dist_eu")
#Manhattan
hist(dist_man,col="blue",xlab=NULL)
boxplot(dist_man,col="blue", main="Boxplot of dist_man")
The mean Euclidean distance is around 5, with a range of 0.1-14.4; the distribution is slightly skewed to the right.
The mean Manhattan distance is around 14, with a range of 0.3-48.9; its distribution is a bit more skewed to the right.
Let's conduct k-means clustering with three clusters, which I think might be a good low-but-not-too-low number of centers.
# k-means clustering with three clusters (=semirandomly selected number)
km <-kmeans(boston_scaled, centers=3)
# plot the Boston dataset with clusters
pairs(boston_scaled, col=km$cluster, pch=19, cex=0.05, lower.panel=NULL)
A three-cluster solution might be plausible; the groups are formed nicely. But let's see what the optimal number of clusters is based on calculations.
# Setting a seed for random generator
set.seed(322654435)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
# visualize the results
library(ggplot2)
ggplot() +
geom_line(aes(x=1:k_max, y=twcss)) +
scale_x_continuous(breaks=c(1:10)) +
xlab("Number of cluster")
If the principle is that the optimal number of clusters is where the total WCSS drops radically, then two clusters would be a good choice here: after two clusters the total within-cluster sum of squares decreases more slowly. So let's run k-means again, now with two centers, and visualize the results.
# k-means clustering
km2 <-kmeans(boston_scaled, centers=2)
# plot the Boston dataset with clusters
pairs(boston_scaled, col=km2$cluster, pch=19, cex=0.05, lower.panel=NULL)
This looks plausible as well. The dot colors indicating the clusters mostly form distinct groups in these subplots.
Reloading and scaling the Boston data set (as far as I know this wouldn't need to be done again, but let's follow the instructions). Then k-means clustering with three centers, and fitting an LDA with the cluster as the target variable, keeping all Boston variables as predictors. Arrows are added with the previously created custom function, replacing the default red color with blue for better visibility; the aspect ratio, text size and arrow length scaling are also changed from their default values.
# reloading the Boston dataset
data('Boston')
boston_scaled <- as.data.frame(scale(Boston))
# k-means clustering with three centers
km3 <- kmeans(boston_scaled, centers=3)
km3
## K-means clustering with 3 clusters of sizes 213, 129, 164
##
## Cluster means:
## crim zn indus chas nox rm
## 1 -0.3792002 -0.3640439 -0.2654730 0.004931512 -0.3761109 -0.2611171
## 2 -0.3978261 1.2205329 -0.9803713 -0.028167813 -0.8262430 1.0139959
## 3 0.8054220 -0.4872402 1.1159370 0.015751437 1.1383961 -0.4584605
## age dis rad tax ptratio black
## 1 -0.06380464 0.1123306 -0.5877259 -0.5858342 0.04591154 0.2799627
## 2 -0.90963302 0.9031496 -0.5839140 -0.6780000 -0.82270661 0.3543735
## 3 0.79837224 -0.8562971 1.2226252 1.2941749 0.58749997 -0.6423552
## lstat medv
## 1 -0.09228022 -0.1222510
## 2 -0.94791366 1.0675487
## 3 0.86546676 -0.6809409
##
## Clustering vector:
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## 2 1 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2
## 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
## 2 2 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 1
## 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80
## 1 1 1 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1
## 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
## 2 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2
## 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120
## 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140
## 1 1 1 1 1 1 1 3 3 3 1 1 1 1 3 3 3 3 3 3
## 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 1 1 3
## 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180
## 1 2 2 2 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 2
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200
## 2 1 2 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2
## 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220
## 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240
## 1 1 1 1 2 2 2 1 2 2 1 2 2 2 1 1 1 2 2 2
## 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260
## 2 1 2 2 1 1 2 1 2 2 2 2 2 2 2 2 2 2 2 2
## 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280
## 2 2 2 2 2 1 2 2 2 1 1 2 1 2 2 2 2 2 2 2
## 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300
## 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 2 1 1 2 2
## 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320
## 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1
## 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340
## 1 1 1 1 1 1 1 1 1 1 1 1 2 1 1 1 1 1 1 1
## 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360
## 1 2 1 2 2 1 1 2 2 2 2 2 2 2 2 2 3 3 3 3
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
## 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
## 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
## 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
## 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
## 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480
## 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
## 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500
## 3 3 3 3 3 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1
## 501 502 503 504 505 506
## 1 1 1 1 1 1
##
## Within cluster sum of squares by cluster:
## [1] 1083.594 1064.601 1727.469
## (between_SS / total_SS = 45.2 %)
##
## Available components:
##
## [1] "cluster" "centers" "totss" "withinss" "tot.withinss"
## [6] "betweenss" "size" "iter" "ifault"
# LDA with cluster as a target
lda2.fit <- lda(km3$cluster ~ ., data=boston_scaled)
# print the lda.fit object
lda2.fit
## Call:
## lda(km3$cluster ~ ., data = boston_scaled)
##
## Prior probabilities of groups:
## 1 2 3
## 0.4209486 0.2549407 0.3241107
##
## Group means:
## crim zn indus chas nox rm
## 1 -0.3792002 -0.3640439 -0.2654730 0.004931512 -0.3761109 -0.2611171
## 2 -0.3978261 1.2205329 -0.9803713 -0.028167813 -0.8262430 1.0139959
## 3 0.8054220 -0.4872402 1.1159370 0.015751437 1.1383961 -0.4584605
## age dis rad tax ptratio black
## 1 -0.06380464 0.1123306 -0.5877259 -0.5858342 0.04591154 0.2799627
## 2 -0.90963302 0.9031496 -0.5839140 -0.6780000 -0.82270661 0.3543735
## 3 0.79837224 -0.8562971 1.2226252 1.2941749 0.58749997 -0.6423552
## lstat medv
## 1 -0.09228022 -0.1222510
## 2 -0.94791366 1.0675487
## 3 0.86546676 -0.6809409
##
## Coefficients of linear discriminants:
## LD1 LD2
## crim -0.032231372 -0.180555401
## zn 0.009313889 -1.068645925
## indus 0.624151354 -0.006677424
## chas 0.039930549 0.124911531
## nox 1.097435166 -0.777946957
## rm -0.193309184 -0.586124321
## age -0.164347859 0.406286594
## dis 0.052315163 -0.286780919
## rad 0.701339082 -0.182828969
## tax 1.041861064 -0.515465919
## ptratio 0.245758857 0.034929672
## black -0.020655070 0.015978407
## lstat 0.187227781 -0.395217119
## medv -0.083275066 -0.768211713
##
## Proportion of trace:
## LD1 LD2
## 0.8504 0.1496
# target classes as numeric
classes <- as.numeric(km3$cluster)
# plot the lda results
plot(lda2.fit, col=classes, pch=classes)
# Using previously defined function for lda biplot arrows
lda.arrows(lda2.fit, color="blue", tex=0.8, myscale=5)
Based on the arrow lengths, the variables nox (nitrogen oxides concentration, parts per 10 million), tax (full-value property-tax rate per $10,000) and zn (proportion of residential land zoned for lots over 25,000 sq.ft.) are the most influential separators of the clusters.
Running the given code for the scaled train data. The code creates a matrix product, which is a projection of the data points. Installing and loading the plotly package and creating 3D plots.
Then adjusting the code: the symbol color is defined by the crime class of the train data set, and another 3D plot is drawn where the color is defined by the k-means clusters. The plots are only shown in the RStudio viewer, so there is no output in the course diary.
# creating train set (again)
train <- boston_scaled[ind,]
#K-means clustering with three centers:
km4 <- kmeans(train, centers=3)
#restoring original crime classification into train data
crime <- cut(train$crim, breaks=bins, include.lowest=TRUE, labels=c("low","med_low","med_high","high"))
#model_predictors <- dplyr::select(train, everything())
model_predictors <- dplyr::select(train, c(-crim))
#dim(model_predictors)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# installing and/or loading plotly package for 3D plot
if (!require("plotly")) {
install.packages("plotly")
library(plotly)
}
## Loading required package: plotly
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
#Create a 3D plot (Cool!) of the columns of the matrix product by typing the code below.
#plot_ly(x=matrix_product$LD1, y=matrix_product$LD2, z=matrix_product$LD3, type='scatter3d', mode='markers')
#Graph 1
plot_ly(x=matrix_product$LD1, y=matrix_product$LD2, z=matrix_product$LD3, color=crime, type='scatter3d', mode='markers')
## Warning: `arrange_()` is deprecated as of dplyr 0.7.0.
## Please use `arrange()` instead.
## See vignette('programming') for more help
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
#Graph 2
plot_ly(x=matrix_product$LD1, y=matrix_product$LD2, z=matrix_product$LD3, color=as.factor(km4$cluster), type='scatter3d', mode='markers')
How do the plots differ? The data points are identical in x-y-z space; only the number of categories/colors (4 crime classes vs. 3 clusters) and thus the group assignments differ. Both plots tell pretty much the same story: there is one clearly distinct blob (the upper quartile of crime rates), while the rest of the points also form rough groups. There is perhaps a bit more overlap (variation) in the crime-category plot.
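As a hypothetical quick check (not part of the original exercise), the two groupings of the train data can also be cross-tabulated; if the plots tell the same story, the counts should concentrate in a few cells.
# Sketch: compare the crime classes and the k-means clusters of the train set
table(crime, km4$cluster)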
This week’s exercise is about reducing dimensions. Tasks include:
Setting working path, loading library and data set.
#Setting working path
setwd("/home/ls/R/projekteja/IODS-project/")
#Reading dataset from the file
human <- read.table("./data/human.rData")
#n=155, 8 vars.
Showing a graphical overview of the data and summaries of the variables in the data.
library(GGally) #for ggpairs
library(ggplot2) #for ggplot
library(corrplot) #for corrplot
library(tidyr) #for gather
cols <- colnames(human)
#Boxplots for each variable
gather(human[cols]) %>% ggplot(aes(y=value)) +
facet_wrap("key", scales = "free", ncol=8) +
geom_boxplot(fill="#FFDB6D") +
theme(strip.text.x=element_text(size = 6),
axis.text.x=element_text(size = 5))
#Histogram for each variable
gather(human[cols]) %>% ggplot(aes(y=value)) +
facet_wrap("key", scales = "free", ncol=8) +
geom_histogram(fill="#FFDB6D",col="black") +
theme(strip.text.x=element_text(size = 6),
axis.text.x=element_text(size = 5))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
#Pairwise and single distributions with density lines
par(col.sub="white")
ggpairs(human, aes(alpha=0.3),
upper=list(continuous="density"),
lower=list(combo=wrap("facethist"))) +
theme(panel.background=element_rect(fill="white", colour="grey50"))
#Correlation plots
cor_matrix<-cor(human)
corrplot.mixed(cor_matrix,
tl.cex=0.75, number.cex=0.75, number.digits=2, lower.col="black")
summary(human)
## edu.ratio lab.ratio exp.life exp.educ
## Min. :0.1717 Min. :0.1857 Min. :49.00 Min. : 5.40
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:66.30 1st Qu.:11.25
## Median :0.9375 Median :0.7535 Median :74.20 Median :13.50
## Mean :0.8529 Mean :0.7074 Mean :71.65 Mean :13.18
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:77.25 3rd Qu.:15.20
## Max. :1.4967 Max. :1.0380 Max. :83.50 Max. :20.20
## GNI Mat.Mor Adol.BR Rep.pct
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
All variables are numerical and continuous, and they have clearly different scales and shapes. The largest range is from ~600 to ~123 000 (GNI, gross national income per capita) and the smallest from ~0.2 to ~1 (lab.ratio, female/male ratio of labour force participation). Most distributions are more or less skewed.
Correlations (Pearson's \(r\)) between the variables are mostly high or moderate, except between lab.ratio (female/male labour force ratio) and the other variables, and between Rep.pct (female representation in Parliament, %) and the other variables. Some coefficients indicate a positive correlation, others a negative one (please see the correlation plot for details).
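As a small sketch (not required by the exercise, using the cor_matrix computed above), the strongest pairwise correlations can also be listed numerically:
# List variable pairs with |r| > 0.7 from the correlation matrix above
high_cor <- which(abs(cor_matrix) > 0.7 & upper.tri(cor_matrix), arr.ind = TRUE)
data.frame(var1 = rownames(cor_matrix)[high_cor[, 1]],
           var2 = colnames(cor_matrix)[high_cor[, 2]],
           r = round(cor_matrix[high_cor], 2))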
Performing principal component analysis (PCA) on the not standardized human data. Showing the variability captured by the principal components. Drawing a biplot displaying the observations by the first two principal components (PC1 coordinate in x-axis, PC2 coordinate in y-axis), along with arrows representing the original variables.
# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human)
#PCA results in nutshell
summary(pca_human)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8
## Standard deviation 1.854e+04 185.5219 25.19 11.45 3.766 1.566 0.1912 0.1591
## Proportion of Variance 9.999e-01 0.0001 0.00 0.00 0.000 0.000 0.0000 0.0000
## Cumulative Proportion 9.999e-01 1.0000 1.00 1.00 1.000 1.000 1.0000 1.0000
# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex=c(0.5,1), col=c("grey","deeppink"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
OK, this doesn't make sense: GNI dominates the model since its scale is so much larger than that of the other variables. It would be against the spirit of statistics to interpret this any further.
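The point can be verified with a quick sketch of the raw column variances:
# Column variances of the unstandardized data: GNI dwarfs everything else,
# so the first principal component of the unscaled data is essentially GNI.
round(apply(human, 2, var), 1)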
Standardizing the variables in the human data and repeating the above analysis. Interpreting the results of both analyses (with and without standardizing) and comparing them.
# standardize the variables
human_std <- scale(human)
# print out summaries of the standardized variables
summary(human_std)
## edu.ratio lab.ratio exp.life exp.educ
## Min. :-2.8189 Min. :-2.6247 Min. :-2.7188 Min. :-2.7378
## 1st Qu.:-0.5233 1st Qu.:-0.5484 1st Qu.:-0.6425 1st Qu.:-0.6782
## Median : 0.3503 Median : 0.2316 Median : 0.3056 Median : 0.1140
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5958 3rd Qu.: 0.7350 3rd Qu.: 0.6717 3rd Qu.: 0.7126
## Max. : 2.6646 Max. : 1.6632 Max. : 1.4218 Max. : 2.4730
## GNI Mat.Mor Adol.BR Rep.pct
## Min. :-0.9193 Min. :-0.6992 Min. :-1.1325 Min. :-1.8203
## 1st Qu.:-0.7243 1st Qu.:-0.6496 1st Qu.:-0.8394 1st Qu.:-0.7409
## Median :-0.3013 Median :-0.4726 Median :-0.3298 Median :-0.1403
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.3712 3rd Qu.: 0.1932 3rd Qu.: 0.6030 3rd Qu.: 0.6127
## Max. : 5.6890 Max. : 4.4899 Max. : 3.8344 Max. : 3.1850
Now each variable has mean 0 and standard deviation (and variance) 1.
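A quick sketch to verify this from the standardized matrix:
# Column means should be (numerically) zero and standard deviations one
round(colMeans(human_std), 10)
round(apply(human_std, 2, sd), 10)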
# perform principal component analysis (with the SVD method)
pca_human_std <- prcomp(human_std)
pca_human_std
## Standard deviations (1, .., p=8):
## [1] 2.0708380 1.1397204 0.8750485 0.7788630 0.6619563 0.5363061 0.4589994
## [8] 0.3222406
##
## Rotation (n x k) = (8 x 8):
## PC1 PC2 PC3 PC4 PC5
## edu.ratio -0.35664370 0.03796058 -0.24223089 0.62678110 -0.5983585
## lab.ratio 0.05457785 0.72432726 -0.58428770 0.06199424 0.2625067
## exp.life -0.44372240 -0.02530473 0.10991305 -0.05834819 0.1628935
## exp.educ -0.42766720 0.13940571 -0.07340270 -0.07020294 0.1659678
## GNI -0.35048295 0.05060876 -0.20168779 -0.72727675 -0.4950306
## Mat.Mor 0.43697098 0.14508727 -0.12522539 -0.25170614 -0.1800657
## Adol.BR 0.41126010 0.07708468 0.01968243 0.04986763 -0.4672068
## Rep.pct -0.08438558 0.65136866 0.72506309 0.01396293 -0.1523699
## PC6 PC7 PC8
## edu.ratio 0.17713316 0.05773644 0.16459453
## lab.ratio -0.03500707 -0.22729927 -0.07304568
## exp.life -0.42242796 -0.43406432 0.62737008
## exp.educ -0.38606919 0.77962966 -0.05415984
## GNI 0.11120305 -0.13711838 -0.16961173
## Mat.Mor 0.17370039 0.35380306 0.72193946
## Adol.BR -0.76056557 -0.06897064 -0.14335186
## Rep.pct 0.13749772 0.00568387 -0.02306476
summary(pca_human_std)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion 0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
## PC8
## Standard deviation 0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion 1.00000
# draw a biplot of the principal component representation and the original variables
biplot(pca_human_std, choices = 1:2, cex=c(0.5,1), col=c("grey","deeppink"))
This seems more reasonable: each variable now has comparable weight in the analysis.
Unstandardized analysis: GNI dominates the model since its scale is so much larger than that of the other variables. Practically all variability is associated with the first principal component (the proportion rounds up to 100%).
Standardized analysis: 53% of the variance is associated with the 1st principal component, 16% with the 2nd, 10% with the 3rd, and so on. Most variables contribute nicely to the first PC with moderate negative or positive loadings, while lab.ratio and Rep.pct do not contribute that much.
From the biplot we can see that all variables have arrows of roughly similar length, i.e. similar effects. Most variables contribute to the first PC, since their arrows are roughly parallel to PC1. lab.ratio and Rep.pct are almost orthogonal to these; they are associated with PC2.
Both models were fitted with the singular value decomposition (SVD) method.
Personal interpretations of the first two principal component dimensions based on the biplot drawn after PCA on the standardized human data:
So PC1 and PC2 together explain 54% + 16% of the total variance.
PC1 is driven mostly by Mat.Mor (maternal mortality ratio) and Adol.BR (adolescent birth rate), and in the opposite direction by exp.life (life expectancy at birth), edu.ratio (female/male ratio of population with secondary education), exp.educ (expected years of schooling) and GNI (gross national income per capita). So we can interpret PC1 as a contextually negative factor, which gets higher values under poor, unwealthy conditions with low gender equality and low education.
PC2 is driven mostly by lab.ratio (female/male ratio of labour force participation) and Rep.pct (percentage of female representation in Parliament). This is a “positive” factor, associated with better gender equality.
Maybe one could summarize all this by saying that PC1 is more about economy and wellbeing while PC2 is about gender equality.
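To make this reading easier, one could (as a sketch, not required by the exercise) add the explained-variance percentages to the biplot axis labels:
# Redraw the standardized-PCA biplot with variance percentages in the axis labels
s <- summary(pca_human_std)
pc_lab <- paste0(colnames(s$importance)[1:2], " (",
                 round(100 * s$importance[2, 1:2], 1), "%)")
biplot(pca_human_std, choices = 1:2, cex = c(0.5, 1),
       col = c("grey", "deeppink"), xlab = pc_lab[1], ylab = pc_lab[2])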
library(FactoMineR)
–> "there is no package called ‘FactoMineR’
Me: “OK, no problem. Let’s install it.”
install.packages(“FactoMineR”)
–>
1: In utils::install.packages(“car”, repos = “https://cran.rstudio.com/”) : installation of package ‘nloptr’ had non-zero exit status
2: In utils::install.packages(“car”, repos = “https://cran.rstudio.com/”) : installation of package ‘lme4’ had non-zero exit status
3: In utils::install.packages(“car”, repos = “https://cran.rstudio.com/”) : installation of package ‘car’ had non-zero exit status
4: In utils::install.packages(“FactoMineR”, repos = “https://cran.rstudio.com/”) : installation of package ‘FactoMineR’ had non-zero exit status
Me: “Hmm… no problem. Let’s install those missing dependencies first, then.”
install.packages(“car”)
–>
ERROR: configuration failed for package ‘nloptr’
ERROR: dependency ‘nloptr’ is not available for package ‘lme4’
* removing ‘/home/ls/R/x86_64-suse-linux-gnu-library/3.5/lme4’
Warning in install.packages :
installation of package ‘lme4’ had non-zero exit status
ERROR: dependencies ‘pbkrtest’, ‘lme4’ are not available for package ‘car’
Me: “Well, this is getting awkward. But we’ll sort this out. Let’s install nloptr first, then.”
install.packages(“nloptr”)
–>
../libtool: line 1102: ERROR:: command not found
make[2]: *** [Makefile:371: libutil.la] Error 127
ERROR: configuration failed for package ‘nloptr’
Me: “☠#💩!”
…A lot of surfing around…
Internet: “Installing library libnlopt0 into operating system may help.”
[installing libnlopt0 (A library for nonlinear optimization) with OS software management system]
install.packages(“nloptr”)
–>
../libtool: line 1102: ERROR:: command not found
make[2]: *** [Makefile:371: libutil.la] Error 127
ERROR: configuration failed for package ‘nloptr’
Me: "☠#☣%☭!
…more surfing around…
Internet: “Installing the nlopt development files may be the key.”
[installing nlopt-devel (Development files for nlopt) with OS software management system]
install.packages(“nloptr”)
It worked! Woohoo!
install.packages(“lme4”)
Works! Yippee!
install.packages(“car”)
–> R: “No way, dude; dependency ‘pbkrtest’ is not available for package ‘car’”.
install.packages(“pbkrtest”)
–> R: “Come on, such package doesn’t even exist!”.
Me: “☠#☣%☭¤💩!!!”
…still more surfing around…
Me: "Let’s try the trick from https://stackoverflow.com/questions/35207624/package-pbkrtest-is-not-available-for-r-version-3-2-2, with updated package version.
Manual installation of pbkrtest_0.4-7:
packageurl <- “https://cran.r-project.org/src/contrib/Archive/pbkrtest/pbkrtest_0.4-7.tar.gz” install.packages(packageurl, repos=NULL, type=“source”)
It worked!
install.packages(“car”)
Works now!!
install.packages(“FactoMineR”)
And finally this works as well! Victory!!!
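For future reference, here is the sequence that finally worked on this machine, condensed into one block (after installing the nlopt development files via the OS package manager); this is just a recap of the steps above, not something generally required.
# Recap of the working installation order (system package nlopt-devel first)
install.packages("nloptr")
install.packages("lme4")
packageurl <- "https://cran.r-project.org/src/contrib/Archive/pbkrtest/pbkrtest_0.4-7.tar.gz"
install.packages(packageurl, repos = NULL, type = "source")
install.packages("car")
install.packages("FactoMineR")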
library(FactoMineR)
#Loading tea data set.
data('tea')
dim(tea)
## [1] 300 36
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
summary(tea)
## breakfast tea.time evening lunch
## breakfast :144 Not.tea time:131 evening :103 lunch : 44
## Not.breakfast:156 tea time :169 Not.evening:197 Not.lunch:256
##
##
##
##
##
## dinner always home work
## dinner : 21 always :103 home :291 Not.work:213
## Not.dinner:279 Not.always:197 Not.home: 9 work : 87
##
##
##
##
##
## tearoom friends resto pub
## Not.tearoom:242 friends :196 Not.resto:221 Not.pub:237
## tearoom : 58 Not.friends:104 resto : 79 pub : 63
##
##
##
##
##
## Tea How sugar how
## black : 74 alone:195 No.sugar:155 tea bag :170
## Earl Grey:193 lemon: 33 sugar :145 tea bag+unpackaged: 94
## green : 33 milk : 63 unpackaged : 36
## other: 9
##
##
##
## where price age sex
## chain store :192 p_branded : 95 Min. :15.00 F:178
## chain store+tea shop: 78 p_cheap : 7 1st Qu.:23.00 M:122
## tea shop : 30 p_private label: 21 Median :32.00
## p_unknown : 12 Mean :37.05
## p_upscale : 53 3rd Qu.:48.00
## p_variable :112 Max. :90.00
##
## SPC Sport age_Q frequency
## employee :59 Not.sportsman:121 15-24:92 1/day : 95
## middle :40 sportsman :179 25-34:69 1 to 2/week: 44
## non-worker :64 35-44:40 +2/day :127
## other worker:20 45-59:61 3 to 6/week: 34
## senior :35 +60 :38
## student :70
## workman :12
## escape.exoticism spirituality healthy
## escape-exoticism :142 Not.spirituality:206 healthy :210
## Not.escape-exoticism:158 spirituality : 94 Not.healthy: 90
##
##
##
##
##
## diuretic friendliness iron.absorption
## diuretic :174 friendliness :242 iron absorption : 31
## Not.diuretic:126 Not.friendliness: 58 Not.iron absorption:269
##
##
##
##
##
## feminine sophisticated slimming exciting
## feminine :129 Not.sophisticated: 85 No.slimming:255 exciting :116
## Not.feminine:171 sophisticated :215 slimming : 45 No.exciting:184
##
##
##
##
##
## relaxing effect.on.health
## No.relaxing:113 effect on health : 66
## relaxing :187 No.effect on health:234
##
##
##
##
##
Tea data description from FactoMineR package:
A data frame with 300 rows and 36 columns. Rows represent the individuals, columns represent the different questions. The first 18 questions are active ones, the 19th is a supplementary quantitative variable (the age) and the last variables are supplementary categorical variables.
Let’s remove the supplementary variables, i.e. retain only the first 18 questions, to keep this simple. The number of cases is only 300, so it might be a good idea to avoid overly complicated models.
#Using only the first 18 "active" questions
tea2 <- dplyr::select(tea,1:18)
#Barplots for each variable
gather(tea2) %>% ggplot(aes(value)) +
geom_bar(col="black", fill="#FFDB6D", width=0.667) +
facet_wrap("key", scales="free", ncol=6) +
labs(x="") +
theme(
axis.text.x=element_text(angle=45, hjust=1, size=7),
panel.background = element_blank(),
strip.background = element_blank()
)
## Warning: attributes are not identical across measure variables;
## they will be dropped
# multiple correspondence analysis
mca <- MCA(tea2, graph=FALSE)
# summary of the model
summary(mca)
##
## Call:
## MCA(X = tea2, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6 Dim.7
## Variance 0.148 0.122 0.090 0.078 0.074 0.071 0.068
## % of var. 9.885 8.103 6.001 5.204 4.917 4.759 4.522
## Cumulative % of var. 9.885 17.988 23.989 29.192 34.109 38.868 43.390
## Dim.8 Dim.9 Dim.10 Dim.11 Dim.12 Dim.13 Dim.14
## Variance 0.065 0.062 0.059 0.057 0.054 0.052 0.049
## % of var. 4.355 4.123 3.902 3.805 3.628 3.462 3.250
## Cumulative % of var. 47.745 51.867 55.769 59.574 63.202 66.664 69.914
## Dim.15 Dim.16 Dim.17 Dim.18 Dim.19 Dim.20 Dim.21
## Variance 0.048 0.047 0.046 0.040 0.038 0.037 0.036
## % of var. 3.221 3.127 3.037 2.683 2.541 2.438 2.378
## Cumulative % of var. 73.135 76.262 79.298 81.982 84.523 86.961 89.339
## Dim.22 Dim.23 Dim.24 Dim.25 Dim.26 Dim.27
## Variance 0.035 0.031 0.029 0.027 0.021 0.017
## % of var. 2.323 2.055 1.915 1.821 1.407 1.139
## Cumulative % of var. 91.662 93.717 95.633 97.454 98.861 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3 ctr
## 1 | -0.541 0.658 0.143 | -0.149 0.061 0.011 | -0.306 0.347
## 2 | -0.361 0.293 0.133 | -0.078 0.017 0.006 | -0.633 1.483
## 3 | 0.073 0.012 0.003 | -0.169 0.079 0.018 | 0.246 0.224
## 4 | -0.572 0.735 0.235 | 0.018 0.001 0.000 | 0.203 0.153
## 5 | -0.253 0.144 0.079 | -0.118 0.038 0.017 | 0.006 0.000
## 6 | -0.684 1.053 0.231 | 0.032 0.003 0.001 | -0.018 0.001
## 7 | -0.111 0.027 0.022 | -0.182 0.090 0.059 | -0.207 0.159
## 8 | -0.210 0.099 0.043 | -0.068 0.013 0.004 | -0.421 0.655
## 9 | 0.118 0.031 0.012 | 0.229 0.144 0.044 | -0.538 1.070
## 10 | 0.258 0.150 0.045 | 0.478 0.627 0.156 | -0.482 0.861
## cos2
## 1 0.046 |
## 2 0.409 |
## 3 0.038 |
## 4 0.030 |
## 5 0.000 |
## 6 0.000 |
## 7 0.077 |
## 8 0.174 |
## 9 0.244 |
## 10 0.158 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2 v.test
## breakfast | 0.166 0.495 0.025 2.756 | -0.166 0.607 0.026 -2.764 |
## Not.breakfast | -0.153 0.457 0.025 -2.756 | 0.154 0.560 0.026 2.764 |
## Not.tea time | -0.498 4.053 0.192 -7.578 | 0.093 0.174 0.007 1.423 |
## tea time | 0.386 3.142 0.192 7.578 | -0.072 0.135 0.007 -1.423 |
## evening | 0.319 1.307 0.053 3.985 | -0.058 0.053 0.002 -0.728 |
## Not.evening | -0.167 0.683 0.053 -3.985 | 0.030 0.028 0.002 0.728 |
## lunch | 0.659 2.385 0.075 4.722 | -0.390 1.018 0.026 -2.793 |
## Not.lunch | -0.113 0.410 0.075 -4.722 | 0.067 0.175 0.026 2.793 |
## dinner | -0.661 1.146 0.033 -3.136 | 0.796 2.025 0.048 3.774 |
## Not.dinner | 0.050 0.086 0.033 3.136 | -0.060 0.152 0.048 -3.774 |
## Dim.3 ctr cos2 v.test
## breakfast -0.483 6.900 0.215 -8.017 |
## Not.breakfast 0.445 6.369 0.215 8.017 |
## Not.tea time 0.265 1.886 0.054 4.027 |
## tea time -0.205 1.462 0.054 -4.027 |
## evening 0.451 4.312 0.106 5.640 |
## Not.evening -0.236 2.254 0.106 -5.640 |
## lunch 0.301 0.822 0.016 2.160 |
## Not.lunch -0.052 0.141 0.016 -2.160 |
## dinner 0.535 1.235 0.022 2.537 |
## Not.dinner -0.040 0.093 0.022 -2.537 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## breakfast | 0.025 0.026 0.215 |
## tea.time | 0.192 0.007 0.054 |
## evening | 0.053 0.002 0.106 |
## lunch | 0.075 0.026 0.016 |
## dinner | 0.033 0.048 0.022 |
## always | 0.045 0.001 0.101 |
## home | 0.005 0.000 0.134 |
## work | 0.112 0.043 0.005 |
## tearoom | 0.372 0.022 0.008 |
## friends | 0.243 0.015 0.103 |
In the summary output we can see…
Eigenvalues: the variances and the percentages of variance retained by each dimension
- there are 27 dimensions
- the first dimension retains the largest share of the total variance, i.e. 9.9%
- only the first four dimensions each retain >5% of the variance, 29.2% in total.
Individuals: only the first 10 individuals (rows) are shown
- the individuals’ contribution (%) to a dimension is highest for row 2 on dimension 3
- cos2 (squared correlation) is likewise highest for row 2 on dimension 3.
The Categories table shows:
- the coordinates of the variable categories
- the contribution (%)
- the cos2 (squared correlations)
- v.test values, which follow a normal distribution: if a value is above/below +/-1.96, the coordinate differs significantly from zero
- the strongest effect seems to be the breakfast/Not.breakfast split on dimension 3, where the v.test values are +/-8.0.
Categorical variables (eta2):
- the squared correlations between each variable and the dimensions
- values close to one indicate a strong link between variable and dimension
- in this table the highest value, 0.372, is for “tearoom” on dimension 1 (see the small sketch below).
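As a small sketch (assuming the eta2 component of the FactoMineR MCA result, as printed in the summary above), these squared correlations can also be visualized:
# Barplot of the squared correlations (eta2) between the active variables
# and the first two MCA dimensions
eta2 <- mca$var$eta2[, 1:2]
barplot(t(eta2), beside = TRUE, las = 2, cex.names = 0.7,
        legend.text = colnames(eta2),
        main = "Squared correlations (eta2) with the MCA dimensions")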
plot(mca, invisible=c("ind"), habillage="quali")
plot(mca,invisible=c("quali.sup","quanti.sup"),cex=0.8)
plotellipses(mca,keepvar="Tea")
There are various plotting options. I made three different biplots.
Let’s first focus on plot #1. It shows the individual variable categories in relation to dimensions 1 and 2. We can see three prominent categories:
- tea shop (tea is purchased mainly from a tea shop, never from a chain store)
- tea is unpackaged (‘never from a bag’)
- high tea price (‘p_upscale’)
These categories have a distinct location on dimension 2, but they are rather similar to each other.
Plot #2 shows the individuals as well. We can see, for example, that row 273 differs somewhat from the other cases in its location on dimension 1. Otherwise there are no rows that are clearly dissimilar from the rest.
Plot #3 shows the individuals in relation to dimensions 1 and 2, with the cases colored by tea type (black, Earl Grey, green). We can see that these three groups are located differently in this two-dimensional space. Ellipses around the category centers are drawn for the three tea types as well.
Conclusion: selecting and drinking tea is a complicated and multidimensional phenomenon.
This week’s exercise is about repeated measures analysis. Tasks include:
Setting working path, loading library and data set.
#Setting working path
setwd("/home/ls/R/projekteja/IODS-project/")
According to instructions, we are going to implement
* the analyses of Chapter 8 of MABS using the RATS data
* the analyses of Chapter 9 of MABS using the BPRS data.
The RATS data come from a nutrition study conducted in three groups of rats (Crowder and Hand, 1990), where the weights of the rats have been measured repeatedly over time.
#Reading datasets from the files
RATSL <- read.table("./data/RATSL.rData")
str(RATSL)
## 'data.frame': 176 obs. of 5 variables:
## $ ID : int 1 2 3 4 5 6 7 8 9 10 ...
## $ Group : int 1 1 1 1 1 1 1 1 2 2 ...
## $ WD : Factor w/ 11 levels "WD1","WD15","WD22",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ Weight: int 240 225 245 260 255 260 275 245 410 405 ...
## $ Time : int 1 1 1 1 1 1 1 1 1 1 ...
RATSL$ID <- factor(RATSL$ID)
RATSL$Group <- factor(RATSL$Group)
library(ggplot2)
library(dplyr)
table(RATSL$Group, RATSL$ID)
##
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
## 1 11 11 11 11 11 11 11 11 0 0 0 0 0 0 0 0
## 2 0 0 0 0 0 0 0 0 11 11 11 11 0 0 0 0
## 3 0 0 0 0 0 0 0 0 0 0 0 0 11 11 11 11
timelevels <- RATSL$Time %>% unique()
# Draw the plot (original values)
ggplot(RATSL, aes(x = Time, y = Weight, linetype = ID)) +
geom_point(shape="circle", color="red", size=0.75) +
geom_line() +
scale_linetype_manual(values = rep(1:8, times=10)) +
facet_grid(. ~ Group, labeller = label_both) +
theme_light() +
theme(legend.position = "none") +
scale_x_continuous(name="Time (days)", breaks=timelevels, minor_breaks=NULL) +
scale_y_continuous(name="Weight (grams)", limits = c(min(RATSL$Weight), max(RATSL$Weight)))
From the ID × Time cross tabulation we can see that each ID has 11 observations in the long-form data, i.e. no values are missing. Eight rats are in group 1, 4 in group 2 and 4 in group 3.
A so-called spaghetti plot is created. It shows that, on average, the values in group 1 are on a different level than in groups 2 and 3, which in turn seem quite similar to each other. Group 2 has a single rat with higher values than the others. In general, the weights are increasing.
Quote from Data Camp:
An important effect we want to take notice is how the subjects who have higher BPRS values at the beginning tend to have higher values throughout the study. This phenomenon is generally referred to as tracking.
In this context, tracking would mean that rats with a high weight at the beginning tend to have high values later on as well. This sounds like a plausible feature for a measure such as weight.
Let’s plot standardized values using the formula
\(standardised(x) = \frac{x - mean(x)}{ sd(x)}\)
This makes it easier to observe possible tracking.
# Standardise rat weight
RATSL <- RATSL %>%
group_by(Time) %>%
mutate(Weight_std = (Weight-mean(Weight))/sd(Weight)) %>%
ungroup()
# Glimpse the data
glimpse(RATSL)
## Rows: 176
## Columns: 6
## $ ID <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, …
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1…
## $ WD <fct> WD1, WD1, WD1, WD1, WD1, WD1, WD1, WD1, WD1, WD1, WD1, WD1…
## $ Weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555…
## $ Time <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8…
## $ Weight_std <dbl> -1.0011429, -1.1203857, -0.9613953, -0.8421525, -0.8819001…
# Plot again with the standardised bprs
ggplot(RATSL, aes(x = Time, y = Weight_std, linetype = ID)) +
geom_point(shape="circle", color="red", size=0.75) +
geom_line() +
scale_linetype_manual(values = rep(1:8, times=10)) +
facet_grid(. ~ Group, labeller = label_both) +
theme_light() +
theme(legend.position = "none") +
scale_x_continuous(name="Time (days)", breaks=timelevels, minor_breaks=NULL) +
scale_y_continuous(name="Standardized weight (grams)", limits = c(min(RATSL$Weight_std), max(RATSL$Weight_std)))
The lines of the standardized values are more stationary. The tracking effect can now be seen clearly: the values of a given rat mostly remain at a fixed level.
# Number of subjects in total (n = 16)
# Note: the standard error below should probably use the group-specific n, so it is not strictly correct.
n <- RATSL$ID %>% unique() %>% length()
# Summary data with mean and standard error of bprs by treatment and week
RATSS <- RATSL %>%
group_by(Group, Time) %>%
summarise( mean = mean(Weight), se = sd(Weight)/sqrt(n) ) %>%
ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
# Glimpse the data
glimpse(RATSS)
## Rows: 33
## Columns: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, …
## $ Time <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 36,…
## $ mean <dbl> 250.625, 255.000, 254.375, 261.875, 264.625, 265.000, 267.375, …
## $ se <dbl> 3.805394, 3.273268, 2.868977, 3.400204, 2.764370, 2.945942, 2.7…
# Plot the mean profiles
ggplot(RATSS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
geom_line() +
theme_light() +
scale_linetype_manual(values = c(1,2,3)) +
geom_point(size=3) +
scale_shape_manual(values = c(1,2,3)) +
geom_errorbar(aes(ymin = mean - se, ymax = mean + se, linetype="1"), width=0.3) +
scale_x_continuous(name="Time (days)", breaks=timelevels, minor_breaks=NULL) +
scale_y_continuous(name = "mean(Weight) +/- se(Weight)")
The plot shows that each group has a distinct level of Weight. Group 1 has lower values, but groups 2 and 3 also seem to differ from each other. Group 2 has the highest variation (shown here as standard errors of the mean), while variation in group 1 is very low. In each group the mean weight mostly grows and ends up at a higher level than at the beginning. My guess is that in a repeated measures ANOVA with all time points, Time would be a statistically significant factor, Group as well, but not the Time × Group interaction.
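As a hedged sketch of that guess (not part of the exercise, and the design is unbalanced, so this is indicative only), a classical repeated-measures ANOVA could be written as:
# Repeated-measures ANOVA sketch: Time and Group as factors, subjects (ID)
# as the error stratum
rm_fit <- aov(Weight ~ Group * factor(Time) + Error(ID), data = RATSL)
summary(rm_fit)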
Quote from MABS book: “The summary measure method operates by transforming the repeated measurements made on each individual in the study into a single value that captures some essential feature of the individual’s response over time. Analysis then proceeds by applying standard univariate methods to the summary measures from the sample of subjects (see later examples). The approach has been in use for many years, and is described in Oldham (1962), Yates (1982) and Matthews et al. (1990).”
We need to follow book analyses. Let’s start with box plots before we do more.
# Plot the mean profiles
ggplot(RATSL, aes(x=as.factor(Time), y=Weight, fill=Group)) +
geom_boxplot(outlier.size=1) +
stat_summary(fun="mean", geom="point", shape=5, size=2, position=position_dodge(width=0.75),color="black") +
theme_light() +
scale_x_discrete(name="Time (days)", breaks=timelevels) +
scale_y_continuous(name = "mean(Weight) +/- se(Weight)")
The diamonds indicate the group mean values at each time point. Please note that the x-axis is no longer on an absolute scale!
To mimic the analyses in the MABS book, let’s apply the summary measure approach to all weight values after the first day of the diet: the mean of days 8 to 64 is the chosen summary measure. We first calculate this measure and then look at boxplots of the measure for each treatment group. The resulting plot is shown below.
RATL64S <- RATSL %>%
filter(Time > 1) %>%
group_by(Group, ID) %>%
summarise(mean=mean(Weight) ) %>%
ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
glimpse(RATL64S)
## Rows: 16
## Columns: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3
## $ ID <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
## $ mean <dbl> 263.2, 238.9, 261.7, 267.2, 270.9, 276.2, 274.6, 267.5, 443.9, …
# Draw a boxplot of the mean versus treatment
ggplot(RATL64S, aes(x=Group, y=mean, fill=Group)) +
geom_boxplot() +
stat_summary(fun="mean", geom="point", shape=23, size=4, fill="white") +
scale_y_continuous(name="mean(Weight), weeks 8-64")
# Create a new data by filtering the outlier and adjust the ggplot code the draw the plot again with the new data
RATL64S1 <- RATL64S %>%
filter(mean < 550)
# Draw a boxplot of the mean versus treatment
ggplot(RATL64S1, aes(x=Group, y=mean, fill=Group)) +
geom_boxplot() +
stat_summary(fun="mean", geom="point", shape=23, size=4, fill="white") +
scale_y_continuous(name="mean(Weight), weeks 8-64")
As seen above, a single outlier with a mean of ~600 grams was removed and the plot was recreated. That high value came from rat #12 in group 2; this rat is now excluded from the analysis.
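A one-line sketch to locate that outlier programmatically instead of reading it from the boxplot:
# Which rat has the extreme summary mean?
RATL64S %>% filter(mean > 550)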
# Fit the linear model with the mean as the response
fit1 <- lm(mean ~ Group, data=RATL64S1)
# Compute the analysis of variance table for the fitted model with anova()
anova(fit1)
## Analysis of Variance Table
##
## Response: mean
## Df Sum Sq Mean Sq F value Pr(>F)
## Group 2 207659 103830 501.81 2.721e-12 ***
## Residuals 12 2483 207
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Post-hoc comparisons, Tukey HSD
TukeyHSD(aov(lm(mean ~ Group, data = RATL64S1)))
## Tukey multiple comparisons of means
## 95% family-wise confidence level
##
## Fit: aov(formula = lm(mean ~ Group, data = RATL64S1))
##
## $Group
## diff lwr upr p adj
## 2-1 187.375 161.39457 213.3554 0.00e+00
## 3-1 262.475 238.97481 285.9752 0.00e+00
## 3-2 75.100 45.79012 104.4099 4.98e-05
That was a one-way ANOVA predicting the rat's mean weight (grams) over days 8-64 from the diet group, with the single outlier rat excluded. The ANOVA table shows that the Group effect is statistically significant (p < 0.001), i.e. at least two groups differ significantly from each other.
Pairwise comparisons (Tukey's HSD) show that each group differs statistically significantly (p < 0.001) from both other groups.
# First we need to import original data
RATS <- read.csv("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/rats.txt", sep="\t")
# Add the baseline from the original data as a new variable to the summary data
RATL64S2 <- RATL64S %>% mutate(Baseline = RATS$WD1)
# Fit the linear model with the mean as the response
fit2 <- lm(mean ~ Baseline + Group, data=RATL64S2)
# Compute the analysis of variance table for the fitted model with anova()
anova(fit2)
## Analysis of Variance Table
##
## Response: mean
## Df Sum Sq Mean Sq F value Pr(>F)
## Baseline 1 253625 253625 1859.8201 1.57e-14 ***
## Group 2 879 439 3.2219 0.07586 .
## Residuals 12 1636 136
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
So that was a one-way ANCOVA predicting the rat's mean weight (grams) over days 8-64 from the diet group, with the day 1 weight used as a baseline covariate. Note that this model was fitted on the full 16-rat summary data, so the outlier rat is included here.
The ANOVA table shows that the baseline level was a statistically significant predictor of the later mean weight. After controlling for the baseline value, the group effect was no longer significant (p = 0.08).
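The same point can be seen from the coefficient table (a quick sketch using the fitted object above):
# Baseline-adjusted coefficients: the group estimates are small relative to
# their standard errors once day-1 weight is in the model
summary(fit2)$coefficients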
Using dataset taken from Davis (Davis, C. S. (2002). Statistical Methods for the Analysis of Repeated Measurements. Springer, New York.), where (Quote from MABS book) “40 male subjects were randomly assigned to one of two treatment groups and each subject was rated on the brief psychiatric rating scale (BPRS) measured before treatment began (week 0) and then at weekly intervals for eight weeks. The BPRS assesses the level of 18 symptom constructs such as hostility, suspiciousness, hallucinations and grandiosity; each of these is rated from one (not present) to seven (extremely severe). The scale is used to evaluate patients suspected of having schizophrenia.”
We need to recode the subject ID to have unique values for each subject, since the original data reuses IDs 1-20 in both treatment groups. Thanks to Jukke Kaaronen for pointing this out.
#Reading datasets from the files
BPRSL <- read.table("./data/BPRSL.rData")
#New ID and variable roles
BPRSL$subject_old <- BPRSL$subject
BPRSL$subject <- BPRSL$subject+(100*BPRSL$treatment)
BPRSL$treatment <- factor(BPRSL$treatment)
BPRSL$subject <- factor(BPRSL$subject)
str(BPRSL)
## 'data.frame': 360 obs. of 6 variables:
## $ treatment : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ subject : Factor w/ 40 levels "101","102","103",..: 1 2 3 4 5 6 7 8 9 10 ...
## $ weeks : Factor w/ 9 levels "week0","week1",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ bprs : int 42 58 54 55 72 48 71 30 41 57 ...
## $ week : int 0 0 0 0 0 0 0 0 0 0 ...
## $ subject_old: int 1 2 3 4 5 6 7 8 9 10 ...
Now treatment group 1 has subject IDs 101-120 and group 2 has IDs 201-220. The recoded variable is subject, while the original one is kept as subject_old.
The data is already in long form, so we don't need to reshape it.
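A quick sketch to confirm the recoding (each subject should now appear in exactly one treatment group, 20 per group):
# Number of distinct subjects per treatment group after recoding
colSums(table(BPRSL$subject, BPRSL$treatment) > 0)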
#Spaghetti plot
ggplot(BPRSL, aes(x=week, y=bprs, linetype=subject)) +
geom_line() +
scale_linetype_manual(values = rep(1:10, times=4)) +
facet_grid(. ~ treatment, labeller = label_both) +
theme_bw() +
theme(legend.position = "none") +
scale_y_continuous(name = "Brief psychiatric rating scale (BPRS)")
The “spaghetti plot” shows a line for each subject and how the BPRS values change over time. We can see that the variance is high in both groups, the values mostly tend to decrease, and there is no obvious mean difference between the treatments.
# Number of weeks, baseline (week 0) included (n = 9)
# Note: the standard error below should probably use the number of subjects per group, so it is not strictly correct.
n <- BPRSL$week %>% unique() %>% length()
# Summary data with mean and standard error of bprs by treatment and week
BPRSS <- BPRSL %>%
group_by(treatment, week) %>%
summarise( mean = mean(bprs), se = sd(bprs)/sqrt(n) ) %>%
ungroup()
## `summarise()` regrouping output by 'treatment' (override with `.groups` argument)
#Mean profile plot
ggplot(BPRSS, aes(x = week+(0.1*(as.numeric(treatment)-2)), y = mean, linetype = treatment, shape = treatment, color=treatment)) +
geom_line() +
scale_linetype_manual(values = c(1,2)) +
geom_point(size=3) +
scale_shape_manual(values = c(1,2)) +
geom_errorbar(aes(ymin = mean - se, ymax = mean + se, linetype="1"), width=0.3) +
theme_bw() +
theme(legend.position = c(0.8,0.8)) +
scale_x_continuous(name = "Week", breaks=c(0:8)) +
scale_y_continuous(name = "mean(bprs) +/- se(bprs)")
The mean profile plot with standard-error bars is useful. It shows the change over time (values are decreasing on average). The variance seems to be a bit lower at the end in group 1. There is no evident group difference, although the gap grows slightly towards the end.
Let's first fit a basic linear model, which does not take the longitudinal nature of the data into account at all.
# create a regression model BPRS_reg
BPRS_reg <- lm(bprs ~ week + treatment, data=BPRSL)
# print out a summary of the model
summary(BPRS_reg)
##
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRSL)
##
## Residuals:
## Min 1Q Median 3Q Max
## -22.454 -8.965 -3.196 7.002 50.244
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 46.4539 1.3670 33.982 <2e-16 ***
## week -2.2704 0.2524 -8.995 <2e-16 ***
## treatment2 0.5722 1.3034 0.439 0.661
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared: 0.1851, Adjusted R-squared: 0.1806
## F-statistic: 40.55 on 2 and 357 DF, p-value: < 2.2e-16
The summary table shows that the week effect is statistically significant while the treatment effect is not. However, the within-subject correlation is not taken into account, so this is not a reasonable analysis and its results should not be taken at face value.
# access library lme4
library(lme4)
## Loading required package: Matrix
##
## Attaching package: 'Matrix'
## The following objects are masked from 'package:tidyr':
##
## expand, pack, unpack
# Create a random intercept model
BPRS_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRSL, REML = FALSE)
# Print the summary of the model
summary(BPRS_ref)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
## Data: BPRSL
##
## AIC BIC logLik deviance df.resid
## 2582.9 2602.3 -1286.5 2572.9 355
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.27506 -0.59909 -0.06104 0.44226 3.15835
##
## Random effects:
## Groups Name Variance Std.Dev.
## subject (Intercept) 97.39 9.869
## Residual 54.23 7.364
## Number of obs: 360, groups: subject, 40
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 46.4539 2.3521 19.750
## week -2.2704 0.1503 -15.104
## treatment2 0.5722 3.2159 0.178
##
## Correlation of Fixed Effects:
## (Intr) week
## week -0.256
## treatment2 -0.684 0.000
The random intercept model contains the explanatory variables week and treatment. The model allows the linear regression fit of each subject to differ in intercept from the other subjects.
The subject random-effect variance is high, indicating considerable variation in the intercepts of the regression fits of the individual subjects' profiles.
The estimated regression parameter for week is large and significant, while the parameter for treatment is smaller and non-significant. This is just like in the linear regression model, but now the standard error of week is smaller while the standard error of treatment is considerably larger.
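The size of the subject variance can also be summarized as an intraclass correlation, i.e. the share of total variance due to between-subject differences (a small sketch using the fitted model above):
# Intraclass correlation from the random intercept model:
# subject variance / (subject variance + residual variance) ~ 97.4/(97.4+54.2) ~ 0.64
vc <- as.data.frame(VarCorr(BPRS_ref))
vc$vcov[1] / sum(vc$vcov)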
# Create a random intercept and random slope model
BPRS_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRSL, REML = FALSE)
# Print the summary of the model
summary(BPRS_ref1)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
## Data: BPRSL
##
## AIC BIC logLik deviance df.resid
## 2523.2 2550.4 -1254.6 2509.2 353
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.4655 -0.5150 -0.0920 0.4347 3.7353
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## subject (Intercept) 167.827 12.955
## week 2.331 1.527 -0.67
## Residual 36.747 6.062
## Number of obs: 360, groups: subject, 40
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 45.9830 2.6470 17.372
## week -2.2704 0.2713 -8.370
## treatment2 1.5139 3.1392 0.482
##
## Correlation of Fixed Effects:
## (Intr) week
## week -0.545
## treatment2 -0.593 0.000
# perform an ANOVA test on the two models
anova(BPRS_ref1, BPRS_ref)
## Data: BPRSL
## Models:
## BPRS_ref: bprs ~ week + treatment + (1 | subject)
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
## BPRS_ref 5 2582.9 2602.3 -1286.5 2572.9
## BPRS_ref1 7 2523.2 2550.4 -1254.6 2509.2 63.663 2 1.499e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Fitting a random intercept and random slope model allows the linear regression fits of the individuals to differ in intercept and also in slope. This makes it possible to account for the individual differences in the subjects' BPRS profiles as well as the effect of time.
The results are pretty much the same as with the random intercept only model. The fixed-effect estimates are more or less the same, although the treatment effect estimate is larger, with a somewhat higher standard error. The treatment effect is still non-significant.
The likelihood ratio test of the random intercept model versus the random intercept and slope model gives a chi-squared statistic of 63.66 on 2 degrees of freedom, and the associated p-value is very small. The random intercept and slope model therefore provides a better fit for these data; in other words, the simplified model is significantly worse.
BPRS_ref2 <- lmer(bprs ~ week + treatment + (week | subject) + week*treatment, data = BPRSL, REML = FALSE)
# print a summary of the model
summary(BPRS_ref2)
## Linear mixed model fit by maximum likelihood ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject) + week * treatment
## Data: BPRSL
##
## AIC BIC logLik deviance df.resid
## 2523.5 2554.5 -1253.7 2507.5 352
##
## Scaled residuals:
## Min 1Q Median 3Q Max
## -2.4747 -0.5256 -0.0866 0.4435 3.7884
##
## Random effects:
## Groups Name Variance Std.Dev. Corr
## subject (Intercept) 164.204 12.814
## week 2.203 1.484 -0.66
## Residual 36.748 6.062
## Number of obs: 360, groups: subject, 40
##
## Fixed effects:
## Estimate Std. Error t value
## (Intercept) 47.8856 2.9840 16.047
## week -2.6283 0.3752 -7.006
## treatment2 -2.2911 4.2200 -0.543
## week:treatment2 0.7158 0.5306 1.349
##
## Correlation of Fixed Effects:
## (Intr) week trtmn2
## week -0.668
## treatment2 -0.707 0.473
## wek:trtmnt2 0.473 -0.707 -0.668
# perform an ANOVA test on the two models
anova(BPRS_ref2, BPRS_ref1)
## Data: BPRSL
## Models:
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## BPRS_ref2: bprs ~ week + treatment + (week | subject) + week * treatment
## npar AIC BIC logLik deviance Chisq Df Pr(>Chisq)
## BPRS_ref1 7 2523.2 2550.4 -1254.6 2509.2
## BPRS_ref2 8 2523.5 2554.6 -1253.7 2507.5 1.78 1 0.1821
This was a random intercept and slope model which also allows for a group × time (i.e. treatment × week) interaction.
The week effect is still statistically significant, but the treatment effect is not, and neither is the interaction, not even close.
The earlier mean profile plot tells the same story: differently shaped mean curves would have indicated an interaction, and there was no such pattern here.
The likelihood ratio test compares this model with the previous one. The observed significance level of 0.18 means the interaction model does not provide a better fit for the BPRS data, so we should stick to the random intercept and slope model.
As in MABS book, let’s plot observed values and predicted values from interaction model.
library(ggpubr) #for get_legend
library(gridExtra) #for multiple grobs
##
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
##
## combine
# draw the plot of BPRSL with the observed bprs values
p_dummy <- ggplot(BPRSL, aes(x=week, y=bprs, color=treatment)) +
geom_line()
leg <- get_legend(p_dummy)
x_scale <- scale_x_continuous(name = "Week", breaks=c(0:8), minor_breaks=NULL)
y_scale <- scale_y_continuous(name = "Brief psychiatric rating scale (BPRS)", limits=c(15,100), breaks=c(20,40,60,80,100))
teema <- theme(legend.position="none", plot.subtitle=element_text(size=rel(0.7)))
p_obs <- ggplot(BPRSL, aes(x=week, y=bprs, group=subject, color=treatment)) +
geom_line(aes(linetype=treatment)) +
theme_bw() +
teema +
labs(
title="Observed",
subtitle="Observed growth rate profiles"
) +
x_scale +
y_scale
# Create a vector of the fitted values
Fitted <- fitted(BPRS_ref2)
# Create a new column fitted to BPRSL
BPRSL$fitted <- Fitted
# draw the plot of BPRSL with the fitted bprs values from the interaction model
p_fit <- ggplot(BPRSL, aes(x=week, y=fitted, group=subject, color=treatment)) +
geom_line(aes(linetype=treatment)) +
theme_bw() +
teema +
labs(
title="Fitted",
subtitle="Fitted growth rate profiles from the interaction model"
) +
x_scale +
y_scale
grid.arrange(
arrangeGrob(p_obs,
p_fit,
nrow=1,
ncol=2),
padding=unit(1, "lines"),
leg,
nrow=2,
heights=c(0.5,0.15))